57 research outputs found

    Explorations in multimodal information presentation


    Illustrating answers: an evaluation of automatically retrieved illustrations of answers to medical questions

    In this paper we discuss and evaluate a method for automatic text illustration, applied to answers to medical questions. Our method for selecting illustrations is based on the idea that similarities between the answers and picture-related text (the picture’s caption or the section/paragraph that includes the picture) can be used as evidence that the picture is appropriate to illustrate the answer. In a user study, participants rated answer presentations consisting of a textual component and a picture. The textual component was a manually written reference answer; the picture was automatically retrieved by measuring the similarity between the text and either the picture’s caption or its section. The caption-based selection method resulted in more attractive presentations than the section-based method; the caption-based method was also more consistent in selecting informative pictures and showed a greater correlation between user-rated informativeness and the system’s relevance confidence. When compared to manually selected pictures, we found that automatically selected pictures were rated similarly to decorative pictures, but worse than informative pictures.
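    The caption-based selection idea above can be sketched as a text-similarity ranking. The sketch below uses a simple bag-of-words cosine similarity as the similarity measure; the paper does not specify its exact measure, and the function names, the `pictures` structure, and its `caption` field are illustrative assumptions.

    ```python
    import math
    from collections import Counter

    def cosine_similarity(a: str, b: str) -> float:
        """Cosine similarity between bag-of-words vectors of two texts."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = (math.sqrt(sum(c * c for c in va.values()))
                * math.sqrt(sum(c * c for c in vb.values())))
        return dot / norm if norm else 0.0

    def select_illustration(answer: str, pictures: list[dict]) -> dict:
        """Pick the picture whose caption is most similar to the answer text."""
        return max(pictures, key=lambda p: cosine_similarity(answer, p["caption"]))

    # Hypothetical usage: rank candidate pictures for a medical answer.
    pictures = [
        {"id": 1, "caption": "Diagram of the four heart valves"},
        {"id": 2, "caption": "Bacteria under a light microscope"},
    ]
    answer = "The heart has four valves that regulate blood flow."
    best = select_illustration(answer, pictures)
    ```

    The section-based variant from the paper would simply compare the answer against the picture's surrounding section text instead of its caption.
    
    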

    Production and evaluation of (multimodal) answers to medical questions

    This paper describes two experiments carried out to investigate the production and evaluation of multimodal answer presentations in the context of a medical question answering system. In a production experiment, participants had to produce answers to different types of questions. The results show that about one in four of the produced answers used multiple media. In an evaluation experiment, users had to evaluate different types of multimodal answer presentations. Answers with an informative visual were evaluated as more informative and more attractive than answers with a merely illustrative visual.

    Too Informal? How a Chatbot’s Communication Style Affects Brand Attitude and Quality of Interaction

    This study investigated the effects of (in)formal chatbot responses and brand familiarity on social presence, appropriateness, brand attitude, and quality of interaction. An online experiment using a 2 (Communication Style: informal vs. formal) by 2 (Brand: familiar vs. unfamiliar) between-subjects design was conducted, in which participants performed customer service tasks with the assistance of chatbots developed for the study. Subsequently, they filled out an online questionnaire. An indirect effect of communication style on brand attitude and quality of interaction through social presence was found. Thus, a chatbot’s informal communication style induced a higher perceived social presence, which in turn positively influenced quality of interaction and brand attitude. However, brand familiarity did not enhance perceptions of appropriateness, indicating that participants do not assign different roles to chatbots as communication partners.

    Verbal redundancy in a procedural animation: On-screen labels improve retention but not behavioral performance

    Multimedia learning research has shown that presenting the same words as spoken text and as written text to accompany graphical information hinders learning (i.e., the redundancy effect). However, recent work showed that a “condensed” form of written text (i.e., on-screen labels) that overlaps with the spoken text, and thus is only partially redundant, can actually foster learning. This study extends this line of research by focusing on the usefulness of on-screen labels in an animation explaining a procedural task (i.e., a first-aid procedure). The experiment had a 2 × 2 × 2 between-subjects design (N = 129) with the factors spoken text (yes vs. no), written text (yes vs. no), and on-screen labels (yes vs. no). Learning outcomes were measured as retention accuracy and behavioral performance accuracy. Results showed that on-screen labels improved retention accuracy (but not behavioral performance accuracy) of the procedure, especially when presented together with spoken text. So, on-screen labels appear to be promising for learning from procedural animations.